35 research outputs found

    Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    No full text
International audience. In many applications, such as video surveillance or defect detection, the perception of a scene is limited in areas of strong contrast. High dynamic range (HDR) capture techniques can overcome this limitation. The proposed method automatically selects multiple exposure times, making the output more visible than a fixed-exposure one. A real-time hardware implementation of the HDR technique that reveals detail in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps and 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
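The abstract does not detail how the three exposure times are chosen. A minimal software sketch of one plausible histogram-driven multiple exposure control loop is given below; the function name, thresholds, and update rule are illustrative assumptions, not the authors' hardware implementation.

```python
import numpy as np

def update_exposures(frame, t_short, t_mid, t_long,
                     sat_level=250, dark_level=5,
                     target_fraction=0.02, step=1.25):
    """Hypothetical multiple-exposure control step.

    Shrinks the short exposure while too many pixels saturate and
    stretches the long exposure while too many stay dark.
    `frame` is an 8-bit grayscale image captured at t_mid.
    """
    pixels = frame.size
    saturated = np.count_nonzero(frame >= sat_level) / pixels
    underexposed = np.count_nonzero(frame <= dark_level) / pixels

    if saturated > target_fraction:
        t_short /= step               # bright areas clip: shorten exposure
    if underexposed > target_fraction:
        t_long *= step                # dark areas crushed: lengthen exposure
    t_mid = np.sqrt(t_short * t_long)  # keep the middle exposure centred

    return t_short, t_mid, t_long
```

In a frame-to-frame loop, the three returned times would be programmed into the sensor for the next bracketed capture.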

    HDR-ARtiSt: A 1280x1024-pixel Adaptive Real-time Smart camera for High Dynamic Range video

    No full text
International audience. Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video in which details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), produces a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for extending the dynamic range captured in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture from the sensor, alternating three exposure times that are dynamically evaluated from frame to frame, and (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams.

    High Dynamic Range Adaptive Real-time Smart Camera: an overview of the HDR-ARTiST project

    No full text
International audience. Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video in which details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), produces a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for extending the dynamic range captured in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR frame creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
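The hardware arithmetic is not reproduced in the abstract, but the underlying exposure fusion and global tone mapping steps can be sketched in a few lines of NumPy. The hat-shaped weighting and the Reinhard-style global operator below are standard textbook choices used here for illustration; the sketch also assumes a linear sensor response, whereas Debevec's full method additionally recovers the camera response curve.

```python
import numpy as np

def merge_hdr(ldr_frames, exposures):
    """Weighted fusion of bracketed 8-bit frames into a radiance map.

    ldr_frames: list of float arrays with values in [0, 255].
    exposures:  matching list of exposure times in seconds.
    Assumes a linear sensor response for simplicity.
    """
    num = np.zeros_like(ldr_frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for z, t in zip(ldr_frames, exposures):
        w = 1.0 - np.abs(z / 127.5 - 1.0)   # hat weight: trust mid-range pixels
        num += w * (z / 255.0) / t          # exposure-normalized radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

def global_tone_map(radiance):
    """Reinhard-style global operator mapping HDR radiance into [0, 1]."""
    key = np.exp(np.mean(np.log(radiance + 1e-6)))  # log-average luminance
    scaled = 0.18 * radiance / key
    return scaled / (1.0 + scaled)
```

Applied per frame to the three parallel streams, `merge_hdr` followed by `global_tone_map` mirrors the HDR creation and GTM stages named in contributions (3) and (4).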

Hardware extensions for embedded image processing processors

    Get PDF
The embedded imager market has entered a new era with the advent of mobile phones equipped with still and video cameras. By 2009, their number is expected to exceed that of all cameras, digital or not, sold since the invention of photography. The embedded electronic imager market is therefore a growth sector, notably through mobile telephony and video telephony. Applications are no longer limited to simple photography or video transmission; matrix code readers, face recognition, biometrics, and 3D vision are a few examples among the many emerging applications. Implementing these applications in mobile devices requires a degree of flexibility that the dedicated IP blocks widely used so far cannot provide, which is why solutions based on programmable processors are indispensable. In this paper we propose extensions intended to improve the performance of processors dedicated to image processing, and we show that these extensions bring a 60% improvement over the whole image acquisition and enhancement chain placed behind the video sensor, which indicates the potential of this type of computing unit for supporting future applications.

    A high speed programmable focal-plane SIMD vision chip. Analog Integrated Circuits and Signal Processing

    Get PDF
International audience. A high-speed analog VLSI image acquisition and low-level image processing system is presented. The architecture of the chip is based on a dynamically reconfigurable SIMD processor array. The chip features a massively parallel architecture enabling the computation of programmable mask-based image processing in each pixel. Each pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. A 64 × 64 pixel proof-of-concept chip was fabricated in a 0.35 μm standard CMOS process, with a pixel size of 35 μm × 35 μm. The chip can capture raw images at up to 10,000 fps and runs low-level image processing at frame rates of 2,000–5,000 fps.

    A 10 000 fps CMOS Sensor With Massively Parallel Image Processing

    Get PDF
International audience. A high-speed analog VLSI image acquisition and preprocessing system has been designed and fabricated in a 0.35 μm standard CMOS process. The chip features a massively parallel architecture enabling the computation of programmable low-level image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel or Laplacian filters are implemented on the circuit. Measured results show that the proposed sensor successfully captures raw images at up to 10,000 frames per second and runs low-level image processing at frame rates of 2,000 to 5,000 frames per second.
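The per-pixel mask-based processing mentioned in both chip abstracts amounts to a 3 × 3 convolution evaluated at every pixel, e.g. the horizontal Sobel gradient. The snippet below is a plain software reference model of that computation, not the analog circuit itself.

```python
import numpy as np

# Horizontal Sobel mask; the Laplacian or any other 3x3 kernel works the same way.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)

def convolve3x3(image, mask=SOBEL_X):
    """Reference model of the per-pixel 3x3 mask computed by the SIMD pixel array."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(image[y-1:y+2, x-1:x+2] * mask)
    return out
```

On the chip, every pixel performs its weighted sum in parallel in the analog domain, which is what allows the 2,000–5,000 fps processing rates reported above.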

    Overview of ghost correction for HDR video stream generation

    No full text
International audience. Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited luminance range of the scene [1], about two orders of magnitude (roughly 256 to 1,024 levels). However, the dynamic range of real-world scenes spans several orders of magnitude (around 10,000 levels). Several methods exist to overcome this limitation and create high dynamic range (HDR) images: expensive ones use a dedicated HDR image sensor, while low-cost solutions use a conventional LDR sensor. Many of the low-cost solutions apply temporal exposure bracketing. The HDR image may be constructed with a standard HDR method (an additional step called tone mapping is then required to display the HDR image on a conventional monitor), or by directly fusing LDR images taken at different exposure times, providing HDR-like [2] images that can be handled directly by LDR monitors. Temporal exposure bracketing works for static scenes, but it cannot be applied directly to dynamic scenes or HDR video, since camera or object motion between bracketed exposures creates artifacts in the HDR image called ghosts [3]. Several techniques exist to detect and remove ghost artifacts (variance-based, entropy-based, bitmap-based, and graph-cut-based ghost detection, among others) [4]; nevertheless, most of these methods are computationally expensive and cannot be considered for real-time implementation. The originality and final goal of our work is to upgrade our current smart camera, which generates an HDR video stream at full sensor resolution (1280 × 1024) and 60 fps [5]. The HDR stream is produced using exposure bracketing (with a conventional LDR image sensor) combined with a tone mapping algorithm. In this paper, we propose an overview of the ghost-correction methods available in the state of the art. The algorithms are selected with regard to our final goal, a real-time hardware implementation of the ghost detection and removal phases.
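Of the families listed above, variance-based ghost detection is perhaps the simplest to outline. The sketch below normalizes each bracketed frame by its exposure time and flags pixels whose radiance estimate varies too much across the stack; the threshold and normalization are illustrative assumptions, not the choices evaluated in the paper.

```python
import numpy as np

def ghost_map_variance(ldr_frames, exposures, threshold=0.02):
    """Variance-based ghost detection over bracketed exposures.

    A static pixel yields roughly the same exposure-normalized value in
    every frame, so a high variance across frames flags likely motion.
    ldr_frames: list of 8-bit frames; exposures: matching times in seconds.
    """
    stack = np.stack([f.astype(np.float64) / (255.0 * t)
                      for f, t in zip(ldr_frames, exposures)])
    variance = np.var(stack, axis=0)
    return variance > threshold       # boolean mask of suspected ghost pixels
```

A practical implementation would additionally ignore saturated or underexposed pixels, which otherwise mimic motion; the hardware-oriented methods surveyed in the paper differ mainly in how cheaply such a mask can be computed and used during fusion.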

A new generation of real-time high dynamic range vision systems

    Get PDF
This thesis is part of the EUREKA European project "High Dynamic Range - Low Noise CMOS imagers", whose aim is to develop new approaches to manufacturing high-performance CMOS image sensors. The objective of the thesis is to design a real-time high dynamic range (HDR) vision system. The main focus is the reconstruction, in real time and at the sensor's frame rate (60 frames/sec), of a high dynamic range video on an embedded computing architecture. Most current sensors produce a digital image that cannot reproduce the true range of light intensities of the real world. Likewise, common monitors, printers, and displays cannot render an extended tonal range. The approach adopted in this thesis is the multiple capture of images acquired with different exposure times, to overcome the limits of current devices. In order to design a system capable of adapting over time to lighting conditions, algorithms dedicated to high dynamic range imaging are studied, from auto-exposure techniques and tone reproduction to the generation of radiance maps. The new smart camera hardware system is able to capture, generate, and render high dynamic range content in a parallelized, real-time video processing context.

    An affordable contactless security system access for restricted area

    No full text
International audience. We present in this paper a security system based on an identity verification process and a low-cost smart camera, intended to prevent unauthorized access to a restricted area. The Le2i laboratory has long-standing experience in the design and implementation of smart cameras [1], for example for real-time face detection [2] or human fall detection [3]. The principle of the system, entirely conceived and designed in our laboratory, is as follows: the authorized user presents an RFID card to a reader based on the Odalid system [4]. The card ID and the time and date of authorized access are checked through a connection to an online server. At the same time, multi-modal identity verification is performed using the camera. There are many ways to perform face recognition and face verification. As a first approach, we implemented standard face localization using a Haar cascade [5] and a verification process based on Eigenfaces (feature extraction), using the ORL (AT&T) face database [6], and an SVM (verification) [7]. The training step was performed with 10-fold cross-validation, using the first 3,000 faces of the LFW face database [8] as the unauthorized class and 20 known faces as the authorized class. The testing step was performed using the rest of the LFW database and 40 other faces of the same known persons. The false positive and false negative rates are respectively 0.004% and 1.39%, with standard deviations of 0.006% and 2.08% respectively, for a precision of 98.9% and a recall of 98.6%. The current PC-based implementation has been designed to be easily deployed on a Raspberry Pi 3 or a similar target. A combination of Eigenfaces [9], Fisherfaces [9], Local Binary Patterns [9], and Generalized Fourier Descriptors [10] will also be studied. However, it is known that using a single modality, such as standard face luminosity, for identity control often leads to ergonomics problems due to the high intra-variability of human faces [11]. Recent work published in the literature and developed in our laboratory has shown that it is possible to extract precise multispectral body information from a standard camera. The next step, and the originality of our system, lies in the use of a near-infrared or multispectral approach in order to improve both the security level (lower false positive rate) and the ergonomics (lower false negative rate). The proposed platform enables security access to be improved and original solutions based on specific illumination to be investigated.
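The verification pipeline described above (Haar cascade localization, Eigenfaces feature extraction, SVM decision) can be approximated with off-the-shelf OpenCV and scikit-learn components. The sketch below follows that structure but is not the laboratory's code; class sizes, PCA dimensionality, and the linear kernel are placeholder assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Haar cascade face localization (cascade file shipped with OpenCV).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(gray, size=(92, 112)):
    """Return the first detected face resized to the ORL image size, or None."""
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten()

def train_verifier(faces, labels, n_components=50):
    """faces: (n_samples, n_pixels) array; labels: 1 = authorized, 0 = unauthorized."""
    pca = PCA(n_components=n_components).fit(faces)        # Eigenfaces projection
    svm = SVC(kernel="linear").fit(pca.transform(faces), labels)
    return pca, svm

def verify(pca, svm, gray):
    """True if the camera frame contains a face classified as authorized."""
    face = extract_face(gray)
    return bool(face is not None and svm.predict(pca.transform([face]))[0] == 1)
```

In the described system, a positive `verify` result would be combined with the RFID check before the access is granted.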

Functional design of a smart camera for real-time video processing

    Get PDF
Computer-assisted vision plays an increasingly important role in our society, for instance in the security of goods and people, industrial production, telecommunications, and robotics. However, technical developments are still timid, slowed by factors as varied as sensor cost, lack of flexibility, the difficulty of rapidly developing complex and robust applications, and the weakness of these systems when interacting with each other or with their environment. This paper presents our concept of a smart camera endowed with real-time video processing capabilities. A CMOS sensor, a processor, and a reconfigurable unit are combined on the same chip to offer flexibility and high performance.